Brasília
Kishida to visit France, Brazil and Paraguay starting next week
Prime Minister Fumio Kishida will visit France, Brazil and Paraguay from Wednesday through May 6, the government said Friday. In Paris on Thursday, Kishida plans to give a keynote speech at a ministerial council meeting of the OECD and meet with French President Emmanuel Macron. The speech will reflect Kishida's intention to lead discussions to resolve socio-economic challenges for the international community, Chief Cabinet Secretary Yoshimasa Hayashi said at a news conference. Kishida is also set to deliver speeches at OECD events themed on generative artificial intelligence and on cooperation with Southeast Asia. In Brasilia on May 3, Kishida will meet with President Luiz Inacio Lula da Silva, this year's chair of the Group of 20 major economies, and hold a joint news conference.
Brazil's Upcoming Presidential Elections Are the Most Hate-Filled in Recent Memory
Every other day, my WhatsApp bursts with messages from friends in Brazil and abroad expressing equal parts excitement and apprehension as Sunday's Brazilian presidential elections approach. On Wednesday, my best friend, who lives in the country's capital, Brasília, texted to say she was scared to wear red clothes to vote this weekend, because red is the color associated with the Workers' Party of former President Luiz Inácio Lula da Silva. Lula, the current front-runner, has a real, if slim, chance to beat far-right incumbent President Jair Bolsonaro in the first round by getting more than 50 percent of valid votes. "The mood is terrible," she wrote, later adding that in the last 48 hours, four instances of political violence had been recorded across the country. My friend's worries are justified.
Nvidia collaborates with the University of Florida to build 700-petaflop AI supercomputer
Nvidia and the University of Florida (UF) today announced plans to build the fastest AI supercomputer in academia. By enhancing the capabilities of UF's existing HiPerGator supercomputer with the DGX SuperPod architecture, Nvidia claims the system -- which it expects will be up and running by early 2021 -- will deliver 700 petaflops (700 quadrillion floating-point operations per second) of performance. Some researchers within the AI community believe that sufficiently powerful computers, in conjunction with reinforcement learning and other techniques, can achieve paradigm-shifting AI advances. A paper recently published by researchers at the Massachusetts Institute of Technology, MIT-IBM Watson AI Lab, Underwood International College, and the University of Brasilia found that deep learning improvements have been "strongly reliant" on increases in compute. And in 2018, OpenAI researchers released an analysis showing that from 2012 to 2018, the amount of compute used in the largest AI training runs grew more than 300,000 times, with a 3.4-month doubling time, far exceeding the pace of Moore's law.
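Exponential growth with a fixed doubling time is easy to sanity-check. The sketch below is illustrative only, not OpenAI's code; the 3.4-month doubling time comes from their published analysis, and the roughly five-year window (late 2012 to late 2017, approximately AlexNet to AlphaZero) is an assumption used to reproduce the headline figure.

    # Growth under a fixed doubling time:
    #   growth = 2 ** (elapsed_months / doubling_time_months)
    doubling_time_months = 3.4   # figure from OpenAI's 2018 analysis
    elapsed_months = 62          # assumed: roughly AlexNet to AlphaZero

    growth = 2 ** (elapsed_months / doubling_time_months)
    print(f"Implied growth in training compute: {growth:,.0f}x")  # ~300,000x

By comparison, Moore's law-style doubling every 24 months would have yielded only about a 6x increase over the same period.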
DL Is Not Computationally Expensive By Accident, But By Design
Researchers from MIT recently collaborated with the University of Brasilia and Yonsei University to estimate the computational limits of deep learning (DL). They stated, "The computational needs of deep learning scale so rapidly that they will quickly become burdensome again." The researchers analysed 1,058 research papers from the arXiv pre-print repository, along with other benchmark references, in order to understand how the performance of deep learning techniques depends on computational power in several important application areas. They stated, "To understand why DL is so computationally expensive, we analyse its statistical as well as computational scaling in theory. We show DL is not computationally expensive by accident, but by design." They added, "The same flexibility that makes it excellent at modelling diverse phenomena and outperforming expert models also makes it dramatically more computationally expensive."
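The theoretical side of that argument can be sketched with standard scaling rates. The derivation below is a reconstruction under textbook assumptions (error decaying as n^{-1/2}, parameters growing with data), not the paper's verbatim analysis:

    % Sketch of the statistical-vs-computational scaling argument.
    % A reconstruction under assumed nonparametric rates, not the
    % paper's exact derivation.
    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    With $n$ training samples, statistical error typically decays as
    \[
      \varepsilon \propto n^{-1/2}, \qquad\text{so}\qquad n \propto \varepsilon^{-2}.
    \]
    A flexible, over-parameterized model grows its parameter count $p$
    with the data, $p \propto n$, and training cost scales with both:
    \[
      C \propto n \cdot p \propto n^{2} \propto \varepsilon^{-4}.
    \]
    Halving the error therefore multiplies the required computation by
    roughly $2^{4} = 16$: the cost is built into the flexibility.
    \end{document}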
MIT researchers warn that deep learning is reaching its computational limit
The computational demands of deep learning are growing so quickly that we are approaching the limits of the technology. A recent study suggests that progress in deep learning is heavily dependent on increases in computational power. Researchers from the Massachusetts Institute of Technology (MIT), MIT-IBM Watson AI Lab, Underwood International College, and the University of Brasilia found in the study that deep learning is strongly reliant on increases in compute. The researchers believe that continued progress in deep learning will require dramatically more computationally efficient methods. In the research paper, the co-authors wrote, "We show deep learning is not computationally expensive by accident, but by design. The same flexibility that makes it excellent at modelling diverse phenomena and outperforming expert models also makes it dramatically more computationally expensive. Despite this, we find that the actual computational burden of deep learning models is scaling more rapidly than (known) lower bounds from theory, suggesting that substantial improvements might be possible."
Deep Learning Reaching Computational Limits, Warns New MIT Study
The study states that deep learning's impressive progress has come with a "voracious appetite for computing power." Researchers at the Massachusetts Institute of Technology, MIT-IBM Watson AI Lab, Underwood International College, and the University of Brasilia have found that we are reaching computational limits for deep learning. The new study states that deep learning's progress has come with a "voracious appetite for computing power" and that continued development will require "dramatically" more computationally efficient methods. "We show deep learning is not computationally expensive by accident, but by design. The same flexibility that makes it excellent at modeling diverse phenomena and outperforming expert models also makes it dramatically more computationally expensive," the coauthors wrote.
MIT researchers warn that deep learning is approaching computational limits
That's according to researchers at the Massachusetts Institute of Technology, Underwood International College, and the University of Brasilia, who found in a recent study that progress in deep learning has been "strongly reliant" on increases in compute. It's their assertion that continued progress will require "dramatically" more computationally efficient deep learning methods, either through changes to existing techniques or via new as-yet-undiscovered methods. "We show deep learning is not computationally expensive by accident, but by design. The same flexibility that makes it excellent at modeling diverse phenomena and outperforming expert models also makes it dramatically more computationally expensive," the coauthors wrote. "Despite this, we find that the actual computational burden of deep learning models is scaling more rapidly than (known) lower bounds from theory, suggesting that substantial improvements might be possible."
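The empirical half of the analysis amounts to fitting power laws to published results: plot benchmark error against training compute on log-log axes and read the scaling exponent off the slope. A minimal sketch of that approach, with made-up numbers (the data, benchmark, and fitted exponent below are illustrative, not from the paper):

    import numpy as np

    # Hypothetical (training compute in FLOPs, benchmark error) pairs:
    compute = np.array([1e15, 1e16, 1e17, 1e18, 1e19])
    error = np.array([0.30, 0.24, 0.19, 0.15, 0.12])

    # Fit log10(error) = a + b * log10(compute); b is the scaling exponent.
    b, a = np.polyfit(np.log10(compute), np.log10(error), 1)
    print(f"fitted exponent: {b:.3f}")  # error ~ compute**b

    # Extrapolate the compute needed to halve the current best error.
    needed = compute[-1] * 0.5 ** (1 / b)
    print(f"compute to halve error: {needed:.2e} FLOPs "
          f"({needed / compute[-1]:,.0f}x more)")

With these illustrative numbers the fitted exponent is about -0.1, so halving the error requires roughly a thousand times more compute, which is the shape of the problem the authors describe.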
Robots Are Solving Banks' Very Expensive Research Problem
As lawmakers in Brasilia debated a controversial pension overhaul for months, a robot more than 5,000 miles away in London kept a close eye on all 513 of them. The algorithm, designed by technology startup Arkera Inc., tracked their comments in Brazilian newspapers and government web pages each day to predict the likelihood the bill would pass. Weeks before the legislation cleared its biggest obstacle in July, the machine's data crunching allowed Arkera analysts to predict the result almost to the letter, giving hedge fund clients in New York and London the insight to buy the Brazilian real near eight-month lows in May. It's since rallied more than 8%. This is the kind of edge that a new generation of researchers is betting will upend the research marketplace.
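As a toy sketch of how such a tracker might turn per-lawmaker signals into a prediction (entirely hypothetical; Arkera has not published its method, and the names, scores, and placeholder values below are stand-ins):

    from dataclasses import dataclass

    @dataclass
    class Lawmaker:
        name: str
        support_score: float  # in [0, 1], e.g. from a text classifier
                              # run over the lawmaker's recent comments

    def expected_yes_votes(chamber: list[Lawmaker]) -> float:
        # Treat each score as a vote probability; sum for the expectation.
        return sum(m.support_score for m in chamber)

    # Brazil's Chamber of Deputies seats 513 members; a constitutional
    # amendment such as the pension bill needs a three-fifths majority,
    # i.e. 308 votes. The uniform 0.65 score is a placeholder.
    chamber = [Lawmaker(f"deputy_{i}", 0.65) for i in range(513)]
    votes = expected_yes_votes(chamber)
    print(f"expected yes votes: {votes:.0f}")     # 333
    print(f"predicted to pass:  {votes >= 308}")  # True

In practice the interesting work is upstream, in scoring each lawmaker's daily comments from newspapers and government web pages; the aggregation itself is simple arithmetic.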